120 research outputs found

    The new man and the new world: the influence of Renaissance humanism on the explorers of the Italian era of discovery

    Get PDF
    In contemporary research, microsaccade detection is typically performed using the calibrated gaze-velocity signal acquired from a video-based eye tracker. To generate this signal, the pupil and corneal reflection (CR) signals are subtracted from each other and a differentiation filter is applied, both of which may prevent small microsaccades from being detected due to signal distortion and noise amplification. We propose a new algorithm in which microsaccades are detected directly from the uncalibrated pupil and CR signals. It is based on detrending followed by windowed correlation between the pupil and CR signals. The proposed algorithm outperforms the most commonly used algorithm in the field (Engbert & Kliegl, 2003), in particular for small-amplitude microsaccades that are difficult to see in the velocity signal even with the naked eye. We argue that it is advantageous to consider the most basic output of the eye tracker, i.e., the pupil and CR signals, when detecting small microsaccades.
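    The following minimal Python sketch illustrates the general idea of detecting candidate microsaccades from detrended pupil and CR signals via windowed correlation. The window length, detrending method, and correlation threshold are illustrative assumptions, not the parameters of the published algorithm.

```python
import numpy as np
from scipy.signal import detrend

def candidate_microsaccades(pupil_x, cr_x, fs=1000, win_ms=10, r_thresh=0.7):
    """Flag samples whose surrounding window shows correlated pupil and CR
    motion; a crude stand-in for the detector described in the abstract."""
    win = max(2, int(fs * win_ms / 1000))
    p = detrend(np.asarray(pupil_x, dtype=float))
    c = detrend(np.asarray(cr_x, dtype=float))
    hits = []
    for i in range(len(p) - win):
        pw, cw = p[i:i + win], c[i:i + win]
        if pw.std() > 0 and cw.std() > 0:
            # Pupil and CR moving together suggests a real eye movement
            # rather than uncorrelated measurement noise.
            if np.corrcoef(pw, cw)[0, 1] > r_thresh:
                hits.append(i)
    return hits
```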

    Influence of Hemianopic Visual Field Loss on Visual Motor Control

    Get PDF
    Background: Homonymous hemianopia (HH) is an anisotropic visual impairment characterized by the binocular inability to see one side of the visual field. Patients with HH often misperceive visual space. Here we investigated how HH affects visual motor control. Methods and Findings: Seven patients with complete HH and no neglect or cognitive decline, and seven gender- and age-matched controls, viewed displays in which a target moved randomly along the horizontal or the vertical axis. They used a joystick to control the target movement to keep it at the center of the screen. We found that the mean deviation of the target position from the center of the screen along the horizontal axis was biased toward the blind side for five of the seven HH patients. More importantly, while the normal-vision controls showed more precise control and larger response amplitudes when the target moved along the horizontal rather than the vertical axis, the control performance of the HH patients did not differ between these two target-motion conditions. Conclusions: Compared with normal-vision controls, HH affected patients' control performance when the target moved horizontally (i.e., along the axis of their visual impairment) rather than vertically. We conclude that hemianopia affects the use of visual information for online control of a moving target specific to the axis of visual impairment. The implications of the findings for driving in hemianopic patients are discussed.
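    A hedged sketch of the kind of control metrics the abstract refers to: the signed mean deviation of the target from screen center per axis (the horizontal bias) together with an RMS error as a precision measure. Variable names, the sign convention, and the data layout are assumptions for illustration only.

```python
import numpy as np

def control_metrics(target_x, target_y, center=(0.0, 0.0)):
    """Per-axis bias and precision of the controlled target position."""
    dx = np.asarray(target_x, dtype=float) - center[0]
    dy = np.asarray(target_y, dtype=float) - center[1]
    return {
        "mean_horizontal_bias": float(dx.mean()),            # sign = side of drift
        "mean_vertical_bias": float(dy.mean()),
        "horizontal_rms_error": float(np.sqrt((dx ** 2).mean())),
        "vertical_rms_error": float(np.sqrt((dy ** 2).mean())),
    }
```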

    Searching with and against each other: Spatiotemporal coordination of visual search behavior in collaborative and competitive settings

    Get PDF
    Although in real life people frequently perform visual search together, in lab experiments this social dimension is typically left out. Here, we investigate individual, collaborative and competitive visual search with visualization of search partners' gaze. Participants were instructed to search a grid of Gabor patches while being eye tracked. For collaboration and competition, searchers were shown in real time at which element the paired searcher was looking. To promote collaboration or competition, points were rewarded or deducted for correct or incorrect answers. Early in collaboration trials, searchers rarely fixated the same elements. Reaction times of couples were roughly halved compared with individual search, although error rates did not increase. This indicates searchers formed an efficient collaboration strategy. Overlap, the proportion of dwells that landed on hexagons that the other searcher had already looked at, was lower than expected from simulated overlap of two searchers who are blind to the behavior of their partner. The proportion of overlapping dwells correlated positively with ratings of the quality of collaboration. During competition, overlap increased earlier in time, indicating that competitors divided space less efficiently. Analysis of the entropy of the dwell locations and scan paths revealed that searchers exhibited a less fixed looking pattern in the competition condition than in the collaboration and individual search conditions. We conclude that participants can efficiently search together when provided only with information about their partner's gaze position by dividing up the search space. Competitive search exhibited more random gaze patterns, potentially reflecting increased interaction between searchers in this condition.
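    The two measures described above can be sketched as follows: overlap (the proportion of one searcher's dwells that land on elements the partner has already inspected) and the Shannon entropy of dwell locations. The dwell data layout, a list of (onset time, element id) pairs per searcher, is an assumption made for illustration.

```python
import numpy as np

def overlap_proportion(dwells_a, dwells_b):
    """dwells_*: lists of (onset_time, element_id) pairs for one searcher."""
    dwells_b = sorted(dwells_b)
    seen_by_b = set()
    overlaps, j = 0, 0
    for t, elem in sorted(dwells_a):
        # Elements the partner has inspected up to time t count as "seen".
        while j < len(dwells_b) and dwells_b[j][0] <= t:
            seen_by_b.add(dwells_b[j][1])
            j += 1
        if elem in seen_by_b:
            overlaps += 1
    return overlaps / len(dwells_a)

def dwell_location_entropy(element_ids):
    """Shannon entropy (bits) of the distribution of dwelled-on elements."""
    _, counts = np.unique(element_ids, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())
```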

    Zero-Shot Segmentation of Eye Features Using the Segment Anything Model (SAM)

    Full text link
    The advent of foundation models signals a new era in artificial intelligence. The Segment Anything Model (SAM) is the first foundation model for image segmentation. In this study, we evaluate SAM's ability to segment features from eye images recorded in virtual reality setups. The increasing requirement for annotated eye-image datasets presents a significant opportunity for SAM to redefine the landscape of data annotation in gaze estimation. Our investigation centers on SAM's zero-shot learning abilities and the effectiveness of prompts such as bounding boxes or point clicks. Our results are consistent with studies in other domains, demonstrating that SAM's segmentation effectiveness can be on par with specialized models depending on the feature, with prompts improving its performance, evidenced by an IoU of 93.34% for pupil segmentation in one dataset. Foundation models like SAM could revolutionize gaze estimation by enabling quick and easy image segmentation, reducing reliance on specialized models and extensive manual annotation.
    Comment: 14 pages, 8 figures, 1 table, submitted to ETRA 2024: ACM Symposium on Eye Tracking Research & Applications
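    A hedged sketch of prompted zero-shot segmentation in the spirit of the study above, using the public segment-anything API. The image path, checkpoint location, click coordinates, and ground-truth mask are placeholders, not details from the paper.

```python
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Load a SAM checkpoint (path is a placeholder) and wrap it in a predictor.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# One eye-camera frame converted to RGB; the file name is assumed.
image = cv2.cvtColor(cv2.imread("eye_frame.png"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# A single positive click placed on the pupil serves as the prompt.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[320, 240]]),  # assumed pupil location in pixels
    point_labels=np.array([1]),           # 1 = foreground click
    multimask_output=True,
)
pred_mask = masks[np.argmax(scores)]

def iou(pred, gt):
    """Intersection over union of two boolean masks."""
    return np.logical_and(pred, gt).sum() / np.logical_or(pred, gt).sum()

# gt_mask = ...  # ground-truth pupil mask from an annotated dataset
# print(iou(pred_mask, gt_mask))
```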

    What’s bothering developers in code review?

    Get PDF
    The practice of code review is widely adopted in industry and has been studied to an increasing degree in the research community. However, the developer experience of code review has received limited attention. Here, we report on initial results from a mixed-method exploratory study of the developer experience.

    LEyes: A Lightweight Framework for Deep Learning-Based Eye Tracking using Synthetic Eye Images

    Full text link
    Deep learning has bolstered gaze estimation techniques, but real-world deployment has been impeded by inadequate training datasets. This problem is exacerbated by both hardware-induced variations in eye images and inherent biological differences across the recorded participants, leading to both feature- and pixel-level variance that hinders the generalizability of models trained on specific datasets. While synthetic datasets can be a solution, their creation is both time- and resource-intensive. To address this problem, we present a framework called Light Eyes or "LEyes" which, unlike conventional photorealistic methods, only models key image features required for video-based eye tracking using simple light distributions. LEyes facilitates easy configuration for training neural networks across diverse gaze-estimation tasks. We demonstrate that models trained using LEyes are consistently on par with or outperform other state-of-the-art algorithms in terms of pupil and CR localization across well-known datasets. In addition, a LEyes-trained model outperforms the industry-standard eye tracker using significantly more cost-effective hardware. Going forward, we are confident that LEyes will revolutionize synthetic data generation for gaze-estimation models and lead to significant improvements in the next generation of video-based eye trackers.
    Comment: 32 pages, 8 figures
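    A toy illustration of the core idea above: rather than rendering photorealistic eyes, a training image is composed from simple light distributions, here a dark 2-D Gaussian standing in for the pupil and a small bright Gaussian for the CR glint. The actual distributions and parameters used by LEyes are not reproduced here; this only sketches the concept.

```python
import numpy as np

def gaussian_2d(h, w, cx, cy, sigma):
    """Unit-height 2-D Gaussian centered at (cx, cy) on an h-by-w grid."""
    y, x = np.mgrid[0:h, 0:w]
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))

def synthetic_eye_image(h=240, w=320, rng=None):
    """Return a synthetic eye-camera frame and its pupil/CR center labels."""
    if rng is None:
        rng = np.random.default_rng()
    px, py = rng.uniform(0.3, 0.7) * w, rng.uniform(0.3, 0.7) * h
    cx, cy = px + rng.normal(0, 5), py + rng.normal(0, 5)
    img = 0.5 * np.ones((h, w))                        # flat background
    img -= 0.4 * gaussian_2d(h, w, px, py, sigma=20)   # dark pupil blob
    img += 0.5 * gaussian_2d(h, w, cx, cy, sigma=3)    # bright CR glint
    img += rng.normal(0, 0.02, (h, w))                 # sensor noise
    return np.clip(img, 0, 1), {"pupil": (px, py), "cr": (cx, cy)}
```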

    Fixation classification: how to merge and select fixation candidates

    Get PDF
    Eye trackers are applied in many research fields (e.g., cognitive science, medicine, marketing research). To give meaning to the eye-tracking data, researchers have a broad choice of classification methods to extract various behaviors (e.g., saccade, blink, fixation) from the gaze signal. There is extensive literature about the different classification algorithms. Surprisingly, not much is known about the effect of the fixation and saccade selection rules that are usually (implicitly) applied. We want to answer the following question: What is the impact of the selection-rule parameters (minimal saccade amplitude and minimal fixation duration) on the distribution of fixation durations? To answer this question, we used eye-tracking data of high and low quality and seven different classification algorithms. We conclude that selection rules play an important role in merging and selecting fixation candidates. For eye-tracking data with good-to-moderate precision (RMSD < 0.5°), the classification algorithm of choice does not matter too much as long as it is sensitive enough and is followed by a rule that selects saccades with amplitudes larger than 1.0° and a rule that selects fixations with durations longer than 60 ms. Because of the importance of selection, researchers should always report whether they performed selection and the values of their parameters.
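    A minimal sketch of the selection rules discussed above, applied after an event classifier has produced fixation candidates: merge consecutive candidates separated by a saccade smaller than the minimal amplitude, then drop fixations shorter than the minimal duration. The candidate data structure and the position-merging rule are assumptions; the thresholds are the values recommended in the abstract.

```python
import numpy as np

def select_fixations(candidates, min_saccade_amp=1.0, min_fix_dur=0.060):
    """candidates: time-ordered dicts with 'start', 'end' (s) and 'x', 'y' (deg)."""
    merged = [dict(candidates[0])]
    for cand in candidates[1:]:
        prev = merged[-1]
        amp = np.hypot(cand["x"] - prev["x"], cand["y"] - prev["y"])
        if amp < min_saccade_amp:
            # Intervening saccade too small: fold into the previous fixation.
            prev["end"] = cand["end"]
            prev["x"] = (prev["x"] + cand["x"]) / 2   # crude position update
            prev["y"] = (prev["y"] + cand["y"]) / 2
        else:
            merged.append(dict(cand))
    # Keep only fixations that last at least the minimal duration.
    return [f for f in merged if f["end"] - f["start"] >= min_fix_dur]
```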

    GANDER: a Platform for Exploration of Gaze-driven Assistance in Code Review

    Get PDF
    Gaze control and gaze assistance in software development tools have so far been explored in the setting of code editing, but other developer activities like code review could also benefit from this kind of tool support. In this paper, we present GANDER, a platform for user studies on gaze-assisted code review. As a proof of concept, we extend the platform with an assistant that highlights name relationships in the code under review based on gaze behavior, and we perform a user study with seven participants. While the participants experienced the interaction as overwhelming and as lacking explicit actions (as seen in other similar user studies), the study demonstrates the platform's capability for mobility, real-time gaze interaction, data logging, replay and analysis.
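    A conceptual sketch (not GANDER's actual implementation) of the kind of gaze-driven assistance described above: the current gaze sample is mapped to a source-code token, and all occurrences of the same identifier are returned for the UI to highlight. The token layout and gaze format are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Token:
    text: str        # the identifier as written in the source
    line: int        # 0-based line in the review view
    col_start: int   # inclusive start column
    col_end: int     # exclusive end column

def token_at_gaze(tokens, gaze_line, gaze_col):
    """Return the token under the current gaze position, if any."""
    for tok in tokens:
        if tok.line == gaze_line and tok.col_start <= gaze_col < tok.col_end:
            return tok
    return None

def related_names(tokens, gaze_line, gaze_col):
    """Tokens to highlight when the reviewer looks at an identifier."""
    hit = token_at_gaze(tokens, gaze_line, gaze_col)
    return [] if hit is None else [t for t in tokens if t.text == hit.text]
```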